-
Coded elastic computing enables virtual machines to be preempted for high-priority tasks while allowing new virtual machines to join an ongoing computation seamlessly. This paper addresses coded elastic computing for matrix-matrix multiplication with straggler tolerance by encoding both storage and download using Lagrange codes. In 2018, Yang et al. introduced the first coded elastic computing scheme for matrix-matrix multiplication, achieving a low computational load requirement; however, their scheme lacks straggler tolerance and incurs a high upload cost. Zhong et al. (2023) addressed these shortcomings with uncoded storage and Lagrange-coded download, but their approach requires each machine to store the entire dataset. This paper introduces a new class of elastic computing schemes that use Lagrange codes to encode both storage and download, reducing the storage size. The proposed schemes efficiently mitigate both elasticity and straggler effects, with a storage size reduced to a fraction 1/L of that of Zhong et al.'s approach, at the expense of doubling the download cost. Moreover, we evaluate the proposed schemes on AWS EC2 by measuring computation time under two different task allocations: heterogeneous and cyclic assignments. Both assignments minimize the computation redundancy of the system while distributing varying computation loads across machines.
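To illustrate the idea of Lagrange-coded storage, the following minimal sketch encodes the row blocks of a matrix as evaluations of a Lagrange interpolation polynomial, one evaluation per machine. It is only a toy illustration of the coding step, not the paper's exact construction: the function names and parameters are hypothetical, and it uses real arithmetic where a deployed scheme would work over a finite field.

```python
import numpy as np

def lagrange_encode(blocks, betas, alpha):
    """Evaluate at `alpha` the Lagrange polynomial f with f(betas[i]) = blocks[i].

    blocks: L equally sized partitions A_1, ..., A_L of the data matrix
    betas:  L distinct interpolation points
    alpha:  the evaluation point assigned to one machine
    """
    coded = np.zeros_like(blocks[0], dtype=float)
    for i, beta_i in enumerate(betas):
        # Lagrange basis polynomial ell_i evaluated at alpha
        ell = 1.0
        for j, beta_j in enumerate(betas):
            if j != i:
                ell *= (alpha - beta_j) / (beta_i - beta_j)
        coded += ell * blocks[i]
    return coded

# Toy example: split A row-wise into L = 2 blocks, encode one block per machine.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
blocks = np.split(A, 2, axis=0)          # each stored block is 1/L of A
betas = [0.0, 1.0]
machine_points = [2.0, 3.0, 4.0, 5.0]    # distinct from the betas
storage = [lagrange_encode(blocks, betas, a) for a in machine_points]
```

Each machine stores a single coded block, a fraction 1/L of the partitioned data. Because the product of two degree-(L−1) polynomials has degree 2(L−1), recovering a matrix-matrix product from such encodings requires 2L−1 evaluations rather than L, which is consistent with the storage-versus-download trade-off described above.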
-
Coded elastic computing, introduced by Yang et al. in 2018, is a technique designed to mitigate the impact of elasticity in cloud computing systems, where machines can be preempted or added during computing rounds. Their approach uses maximum distance separable (MDS) coding for both storage and download in matrix-matrix multiplication, but it cannot tolerate stragglers and has high encoding complexity and upload cost. In 2023, we addressed these limitations by employing uncoded storage and Lagrange-coded download; however, that approach results in a large storage size. To address the challenges of storage size and upload cost, in this paper we focus on Lagrange-coded elastic computing based on uncoded download. We propose a new class of elastic computing schemes using Lagrange-coded storage with uncoded download (LCSUD). Our proposed schemes address both elasticity and straggler challenges while achieving lower storage size, reduced encoding complexity, and lower upload cost than existing methods.
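The decoding side of any Lagrange-coded scheme reduces to polynomial interpolation, which is also what provides elasticity and straggler tolerance: the server can recover the desired result from any sufficiently large subset of machine responses. The sketch below (hypothetical names, real arithmetic in place of finite-field arithmetic) recovers a degree-1 matrix polynomial from two of four machines after the other two are preempted or straggle.

```python
import numpy as np

def interpolate_at(results, alphas, x):
    """Lagrange-interpolate the matrix-valued polynomial through the
    points (alphas[i], results[i]) and evaluate it at x."""
    out = np.zeros_like(results[0], dtype=float)
    for i, (a_i, r_i) in enumerate(zip(alphas, results)):
        ell = 1.0
        for j, a_j in enumerate(alphas):
            if j != i:
                ell *= (x - a_j) / (a_i - a_j)
        out += ell * r_i
    return out

# Toy demo: a degree-1 matrix polynomial f(x) = B0 + B1 * x evaluated at the
# points held by 4 machines; any 2 responses suffice, so 2 losses are fine.
rng = np.random.default_rng(0)
B0, B1 = rng.standard_normal((2, 3, 3))
machine_points = [1.0, 2.0, 3.0, 4.0]
responses = {a: B0 + B1 * a for a in machine_points}

survivors = [2.0, 4.0]                    # two machines preempted or straggling
recovered = interpolate_at([responses[a] for a in survivors], survivors, 0.0)
assert np.allclose(recovered, B0)         # f(0) = B0 recovered exactly
```

In general, a degree-d coded computation is recoverable from any d+1 responses, so preempted machines and stragglers are interchangeable from the decoder's point of view.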
-
This paper studies information-theoretically secure aggregation in federated learning, which involves K distributed users and a central server. For security, the server may recover only the aggregate of the locally trained model updates, with no other information about the users' local data being leaked. The secure aggregation process typically consists of two phases: a key sharing phase and a model aggregation phase. We adopt a constraint on keys introduced in previous research, known as "uncoded groupwise keys," during the key sharing phase: each set of S users shares an independent key. During the model aggregation phase, each user transmits its encrypted model to the server. To tolerate user dropouts (i.e., some users may not respond), where up to K−U users may drop out and the identity of the surviving users cannot be predicted in advance, at least two rounds of transmission are required in the model aggregation phase. In the first round, users send their masked models. In the second round, based on the identity of the users surviving the first round, the surviving users send additional messages that help the server decrypt the sum of the users' trained models. Our goal is to minimize the amount of transmission in the two rounds. Additionally, we consider user collusion, where up to T users may collude with the server. This imposes a stricter security constraint: the server must learn nothing beyond the aggregated model updates even if it colludes with any set of up to T users. For this more challenging problem, we propose schemes that ensure secure aggregation and achieve the capacity region when S ∈ {2} ∪ [K−U+1 : K−T]. Experimental results on Tencent Cloud also show that the proposed secure aggregation schemes improve the model aggregation time compared to the benchmark scheme.
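The S = 2 case of uncoded groupwise keys corresponds to classical pairwise masking, which the following sketch illustrates for the first-round transmission only. It is a simplified toy (hypothetical function names, no dropout or collusion handling): each pair of users shares an independent key, and the masks cancel in the server's sum so that only the aggregate is revealed.

```python
import numpy as np

P = 2**31 - 1  # a prime modulus; real schemes operate over a finite field

def pairwise_keys(K, dim, seed=0):
    """One independent key per unordered pair of users (the S = 2 case)."""
    rng = np.random.default_rng(seed)
    return {(i, j): rng.integers(0, P, size=dim)
            for i in range(K) for j in range(i + 1, K)}

def mask(i, model, keys, K):
    """User i adds keys shared with higher-indexed users and subtracts keys
    shared with lower-indexed users, so all keys cancel in the server's sum."""
    masked = model.copy()
    for j in range(K):
        if j == i:
            continue
        key = keys[(min(i, j), max(i, j))]
        masked = (masked + key) % P if j > i else (masked - key) % P
    return masked

K, dim = 4, 5
rng = np.random.default_rng(1)
models = [rng.integers(0, P, size=dim) for _ in range(K)]
keys = pairwise_keys(K, dim)
uploads = [mask(i, models[i], keys, K) for i in range(K)]
agg = sum(uploads) % P
assert np.array_equal(agg, sum(models) % P)  # server learns only the sum
```

If some users drop out after the first round, their shared keys no longer cancel, which is why a second round is needed in which the surviving users send the additional information that lets the server remove the missing masks.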